Reconstruction of Super Resolution High Dynamic Range Image from Multiple-Exposure Images
Abstract
Recent research efforts have focused on combining high dynamic range (HDR) imaging with super-resolution (SR) reconstruction to enhance both the intensity range and resolution of images beyond the apparent limits of the sensors that capture them. The processes developed to date start with a set of multiple-exposure input images with low dynamic range (LDR) and low resolution (LR), and require several procedural steps: conversion from LDR to HDR, SR reconstruction, and tone mapping. Input images captured with irregular exposure steps degrade the quality of the output images of this process. In this paper, we present a simplified framework that replaces the separate procedures of previous methods and is also robust to different sets of input images. The proposed method first calculates weight maps to determine the best visible parts of the input images. The weight maps are then applied directly to SR reconstruction, and the best visible parts of the dark and highlighted areas of each input image are preserved without LDR-to-HDR conversion, resulting in high dynamic range. A new luminance control factor (LCF) is used during SR reconstruction to adjust the luminance of input images captured with irregular exposure steps and to ensure acceptable luminance of the resulting output images. Experimental results show that the proposed method produces SR images of HDR quality with luminance compensation.

Introduction

Many efforts have been made to enhance the image quality of digital still cameras by improving the physical performance of their image sensors. Even so, digital cameras still suffer from a limited dynamic range and a spatial resolution that fall short of what is encountered in the real world. High dynamic range (HDR) imaging algorithms have been developed to overcome the problem of underexposed or overexposed images caused by the narrow dynamic range of cameras. This involves assembling multiple-exposure low dynamic range (LDR) images from a normal camera to obtain a full dynamic range image [1]. However, HDR imaging techniques require the camera response curve (CRC) of each camera so that the intensity of the actual scene can be recovered [2]. Additionally, many common displays have a limited dynamic range and cannot display HDR images directly; such displays require a tone-mapping process that compresses the dynamic range of the image to fit the display [2]. Many tone-mapping algorithms have been proposed, but each is tailored to a specific purpose and thus has its own advantages and disadvantages. Consequently, the quality of HDR images is not always preserved. The limited resolution of normal digital cameras has been overcome with SR reconstruction, which increases the spatial resolution by exploiting the correlation of several sequential input images obtained by the camera under identical conditions. Because SR reconstruction requires multiple images with the same exposure time, combining it with HDR imaging is difficult, since the latter requires multiple-exposure images. Even so, recent studies have addressed the challenge of combining HDR imaging and SR reconstruction to obtain high-quality, high-resolution images with high dynamic range. Gunturk sought to obtain HDR-SR images by proposing a new imaging model for SR reconstruction that includes both dynamic range and spatial-domain effects [3].
Figure 1 and the following equation describe that imaging model:

z_k = f(t_k H_k q + w_k) + v_k    (1)

where z_k is the low-resolution observation for the k-th LDR-LR image (the Y channel of its YCbCr values), f(·) is the nonlinear camera response function, t_k is the exposure time, q is the high-resolution input signal, H_k is the linear mapping that incorporates motion, the point spread function, and down-sampling, and w_k and v_k are noise terms.

Figure 1. Imaging model by Gunturk.

Although the resulting images contain HDR information, presenting them on normal displays requires tone mapping, which may not give satisfactory results depending on the tone-mapping method used. Schubert also suggested a framework for combining HDR and SR [4]. He used a similar imaging model that included photometric camera calibration data obtained using Debevec's method and tone mapping [2]. Both methods require estimating the CRC function to obtain the dynamic range of the actual scene; however, tone mapping is also required and does not always produce satisfactory results. Indeed, estimating the real-world CRC function is not a simple task. Furthermore, each camera requires its own CRC function, and tone-mapping algorithms have a very significant influence on the quality of the resulting images.

Here, we propose a new framework for obtaining high-quality images that appear to have high dynamic range and super resolution. Because it blends multiple-exposure images, the framework is robust to input images captured with irregular exposure steps. The proposed method first calculates weight maps, which it then applies directly to SR reconstruction with pyramid merging [5, 6]. These weight maps are obtained by retaining only the best visible parts of each multiple-exposure input image, so that the details of dark and highlighted regions are preserved during SR reconstruction and the SR output images have HDR quality without LDR-to-HDR conversion [5]. However, weight maps alone do not yield suitable luminance in the output images when the input set comes from irregular exposure steps. Thus, a new luminance control factor (LCF) is proposed for application during SR reconstruction to correct this. The resulting images have suitable resolution in both light and dark sections.

SR reconstruction based on weight maps

The overall objective of this work was to obtain an SR image that has fine details in light and dark sections. Figure 2 is a flowchart of the proposed method assuming four input images with proper exposure steps.

Figure 2. Flow of the proposed method.

The specific procedure has two parts: color reproduction and SR reconstruction from multiple-exposure images. During color reproduction, exposure fusion is performed on a series of LDR-LR input images to produce a single HDR-LR image [5]. Then, only the CbCr values are extracted and resized for use in the final conversion from YCbCr to digital RGB values. Next, SR reconstruction takes place assuming perfect image registration. During this process, weight maps are first calculated to determine the areas in each input image that contain the "best" intensities. These weight maps are then applied to SR reconstruction with pyramid merging to prevent intensity distortion from unwanted iterations. Simultaneously, LCFs (η_k) are applied to each input image to adjust the luminance so that the overall luminance of the output image is suitable.
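To make equation (1) concrete, the following Python sketch simulates a single LDR-LR observation z_k from a high-resolution scene q. It assumes perfect registration (no motion inside H_k), a Gaussian point spread function followed by decimation, a simple gamma-type response with clipping standing in for f(·), and Gaussian noise for w_k and v_k; these specific choices, the function name, and the parameter values are illustrative assumptions rather than details taken from Gunturk's paper [3].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_ldr_lr_observation(q, t_k, scale=2, sigma_blur=1.0,
                                gamma=1.0 / 2.2, noise_std=0.01, seed=0):
    """Simulate one observation z_k = f(t_k H_k q + w_k) + v_k (equation 1).

    q       : high-resolution scene radiance, 2-D array in linear units
    t_k     : exposure time of the k-th capture
    scale   : down-sampling factor inside H_k
    """
    rng = np.random.default_rng(seed)

    # H_k: point spread function (blur) followed by down-sampling;
    # motion is omitted because registration is assumed to be perfect
    blurred = gaussian_filter(q, sigma_blur)
    lr = blurred[::scale, ::scale]

    # exposure scaling plus pre-response sensor noise w_k
    exposed = t_k * lr + rng.normal(0.0, noise_std, lr.shape)

    # f(.): assumed gamma-type camera response with saturation (clipping)
    ldr = np.clip(exposed, 0.0, 1.0) ** gamma

    # post-response noise v_k (e.g., quantization or read-out noise)
    return np.clip(ldr + rng.normal(0.0, noise_std, lr.shape), 0.0, 1.0)
```

Varying t_k across captures produces the under- and over-exposed LR observations whose "best" regions the weight maps described below are designed to exploit.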
Weight map construction

The exposure fusion algorithm generates high-quality images such as those that result from HDR imaging [5]; that is, the resulting images show fine details in both dark and bright areas. The algorithm selects the "best" parts of the images in a multiple-exposure sequence. These "best" parts are defined by a weight map based on a combination of quality measures, namely contrast, saturation, and good exposure [5], and this map is used to blend the input images. The contrast measure C is calculated as the absolute value of the Laplacian-filtered image for each gray-scaled input image. The saturation measure S is obtained as the standard deviation across the R, G, and B channels at each pixel. The good-exposure measure E indicates how well a pixel is exposed: the intensity is weighted by how close it is to 0.5 using a Gaussian curve, and this weighting is applied to each channel. The final weight map for each of the k = 1, ..., N images is defined as follows (a code sketch of this computation is given at the end of this section):

W_{ij,k} = C_{ij,k} × S_{ij,k} × E_{ij,k}    (2)

where i and j denote the pixel location and k is the index of the image in the input sequence.

Figure 3. A set of test input images.
Figure 4. Weight maps for each input image.

Figure 3 shows a test set of multiple-exposure images with RGB channels in TIFF format. Because the resulting image is affected by the set of input images, seven images were prepared from the auto-exposed image of a Canon 5D Mark II camera with a fixed aperture value of 5.0, for use as a reference image. Figure 4 shows the resulting weight maps for each input image. Dark and highlighted areas have low values in the weight maps; high values indicate the "best" areas of the input images.

Proposed SR reconstruction

We first assume the imaging model to be the following:

z_k = H_k q + v_k    (3)

where z_k, H_k, q, v_k, and k are the same as in equation (1). If the image registration is perfect, then H_k is composed of spatial warping, blurring, and down-sampling. In the deterministic approach to SR reconstruction, the inverse imaging model can be solved by choosing q to minimize the following cost function:
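As a concrete illustration of equation (2), the following Python sketch computes the per-pixel weight maps from the contrast, saturation, and good-exposure measures described above, following the exposure fusion formulation of Mertens et al. [5]. The Gaussian width of 0.2, the small epsilon, the final per-pixel normalization, and the function name are common choices assumed for illustration rather than values stated in this paper.

```python
import numpy as np
from scipy.ndimage import laplace

def weight_maps(images, sigma_e=0.2, eps=1e-12):
    """Compute W_ij,k = C_ij,k * S_ij,k * E_ij,k (equation 2) for a list of
    RGB images with values in [0, 1]."""
    maps = []
    for img in images:                            # img has shape (H, W, 3)
        gray = img.mean(axis=2)

        # contrast C: absolute response of a Laplacian filter on the gray image
        C = np.abs(laplace(gray))

        # saturation S: standard deviation across the R, G, B channels
        S = img.std(axis=2)

        # good exposure E: Gaussian closeness of each channel to 0.5,
        # combined over the three channels
        E = np.prod(np.exp(-0.5 * ((img - 0.5) / sigma_e) ** 2), axis=2)

        maps.append(C * S * E + eps)

    W = np.stack(maps, axis=0)                    # shape (N, H, W)
    return W / W.sum(axis=0, keepdims=True)       # normalize per pixel
```

The normalized maps can then drive the pyramid merging used during SR reconstruction, so that each output pixel draws mainly on the input images in which it is best exposed.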